

Dargana: fine-tuning EarthPT for dynamic tree canopy mapping from space

Smith, Michael J., Fleming, Luke, Geach, James E., Roberts, Ryan J., Kalaitzis, Freddie, Banister, James

arXiv.org Artificial Intelligence

Aspia Space

We present Dargana, a fine-tuned variant of the EarthPT time-series foundation model that achieves specialisation using <3% of its pre-training data volume and 5% of its pre-training compute. Dargana is fine-tuned to generate regularly updated classification of tree canopy cover at 10 m resolution, distinguishing conifer and broadleaved tree types. Using Cornwall, UK, as a test case, the model achieves a pixel-level ROC-AUC of 0.98 and a PR-AUC of 0.83 on unseen satellite imagery. Dargana can identify fine structures like hedgerows and coppice below the training sample limit, and can track temporal changes to canopy cover such as new woodland establishment. Our results demonstrate how pre-trained Large Observation Models like EarthPT can be specialised for granular, dynamic land cover monitoring from space, providing a valuable, scalable tool for natural capital management and conservation.


Zero-Shot Warning Generation for Misinformative Multimodal Content

Delvecchio, Giovanni Pio, Nguyen, Huy Hong, Echizen, Isao

arXiv.org Artificial Intelligence

The widespread prevalence of misinformation poses significant societal concerns. Out-of-context misinformation, where authentic images are paired with false text, is particularly deceptive and easily misleads audiences. Most existing detection methods primarily evaluate image-text consistency but often lack sufficient explanations, which are essential for effectively debunking misinformation. We present a model that detects multimodal misinformation through cross-modality consistency checks, requiring minimal training time. Additionally, we propose a lightweight model that achieves competitive performance using only one-third of the parameters. We also introduce a dual-purpose zero-shot learning task for generating contextualized warnings, enabling automated debunking and enhancing user comprehension. Qualitative and human evaluations of the generated warnings highlight both the potential and limitations of our approach.


Least-to-Most Prompting Enables Complex Reasoning in Large Language Models

Zhou, Denny, Schärli, Nathanael, Hou, Le, Wei, Jason, Scales, Nathan, Wang, Xuezhi, Schuurmans, Dale, Cui, Claire, Bousquet, Olivier, Le, Quoc, Chi, Ed

arXiv.org Artificial Intelligence

Chain-of-thought prompting has demonstrated remarkable performance on various natural language reasoning tasks. However, it tends to perform poorly on tasks that require solving problems harder than the exemplars shown in the prompts. To overcome this challenge of easy-to-hard generalization, we propose a novel prompting strategy, least-to-most prompting. The key idea in this strategy is to break down a complex problem into a series of simpler subproblems and then solve them in sequence. Solving each subproblem is facilitated by the answers to previously solved subproblems. Our experimental results on tasks related to symbolic manipulation, compositional generalization, and math reasoning reveal that least-to-most prompting is capable of generalizing to more difficult problems than those seen in the prompts. A notable finding is that when the GPT-3 code-davinci-002 model is used with least-to-most prompting, it can solve the compositional generalization benchmark SCAN under any split (including the length split) with an accuracy of at least 99% using just 14 exemplars, compared to only 16% accuracy with chain-of-thought prompting. This is particularly noteworthy because neural-symbolic models in the literature that specialize in solving SCAN are trained on the entire training set containing over 15,000 examples. We have included prompts for all the tasks in the Appendix.
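The decompose-then-solve loop the abstract describes can be sketched in a few lines. This is a minimal illustration of the control flow only: `toy_llm` is a stand-in for a real language model, and the prompt formats are hypothetical, not the paper's actual few-shot prompts.

```python
def least_to_most(question, llm):
    """Two-stage least-to-most prompting sketch.

    Stage 1 asks the model to decompose the complex question into simpler
    subquestions. Stage 2 solves the subquestions in order, appending each
    answer to the running context so that later subproblems can build on
    the answers to earlier ones.
    """
    # Stage 1: problem decomposition (one subquestion per line, by convention).
    decomposition = llm(f"Decompose into subquestions:\n{question}")
    subquestions = [s for s in decomposition.splitlines() if s.strip()]

    # Stage 2: sequential subproblem solving with answer carry-over.
    context = question
    answer = ""
    for sub in subquestions:
        answer = llm(f"{context}\nQ: {sub}\nA:")
        context += f"\nQ: {sub}\nA: {answer}"
    return answer  # answer to the final (hardest) subquestion


# Toy stand-in for a real model, used only to demonstrate the plumbing.
def toy_llm(prompt):
    if prompt.startswith("Decompose"):
        return "How many items per step?\nHow many items in total?"
    return "42"
```

The key design point is that the context grows monotonically: each solved subquestion and its answer are fed into the next prompt, which is what enables generalization to problems harder than the exemplars.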


Brexit 'contingency planning' under way

BBC News

George Osborne has said contingency planning is taking place to anticipate the likely impact on the UK's financial stability of a vote to leave the EU. The chancellor told MPs there would be a "number of impacts" on the financial system that would have to be addressed. Economists have warned of market volatility and a sharp fall in sterling should there be a Leave vote. The chancellor said it would be up to the Bank of England to consider appropriate monetary responses. Mr Osborne, a key figure in the Remain campaign, has previously refused to be drawn on whether the Treasury and other public bodies were, in any way, preparing for the possibility of a Leave vote in the referendum on 23 June.